
    Probabilistic framework for image understanding applications using Bayesian Networks

    Machine learning algorithms have been successfully utilized in various systems and devices. They can improve the usability and quality of such systems in terms of intelligent user interfaces, fast performance and, more importantly, high accuracy. In this research, machine learning techniques are applied to image understanding, a research area shared between image analysis and computer vision that involves higher-level processing of a target image to make sense of the scene captured in it. A general probabilistic framework for image understanding is presented, covering (i) the collection of images to generate a comprehensive and valid database, (ii) the generation of an unbiased ground truth for that database, (iii) the selection of classification features and the elimination of redundant ones, and (iv) the use of this information to test a new sample set. Two research projects were developed as examples of the general image understanding framework: identification of regions of interest, and image segmentation evaluation. These techniques, among others, are combined in an object-oriented rendering system for printing applications. The discussion in this doctoral dissertation explores the means for developing such a system from an image understanding and processing perspective. It is worth noting that this work does not aim to develop a printing system; it only proposes adding some essential features to current printing pipelines to achieve better visual quality when printing images and photos. Hence, we assume that image regions have been successfully extracted from the printed document. These images are used as input to the proposed object-oriented rendering algorithm, which employs methodologies for color image segmentation, region-of-interest identification and semantic feature extraction. Probabilistic approaches based on Bayesian statistics have been utilized to develop the proposed image understanding techniques.
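    As an illustration of the probabilistic machinery described above, the sketch below trains a naive Bayes classifier to score segmented regions as regions of interest after discarding uninformative features; the feature names and toy data are hypothetical stand-ins, not the dissertation's actual feature set.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.naive_bayes import GaussianNB

# Hypothetical training data: one row per segmented region with
# [mean saturation, local contrast, distance from image center],
# labeled 1 for region of interest and 0 for background.
X_train = np.array([
    [0.82, 0.61, 0.20], [0.77, 0.58, 0.25],
    [0.85, 0.66, 0.15], [0.74, 0.52, 0.30],
    [0.15, 0.22, 0.80], [0.20, 0.30, 0.75],
    [0.10, 0.18, 0.90], [0.25, 0.28, 0.70],
])
y_train = np.array([1, 1, 1, 1, 0, 0, 0, 0])

# Mirror the framework's feature-selection step: keep features that
# carry information about the label, drop (near-)redundant ones.
scores = mutual_info_classif(X_train, y_train, random_state=0)
keep = scores > 0.01

# Naive Bayes yields P(region of interest | features) directly.
model = GaussianNB().fit(X_train[:, keep], y_train)
region = np.array([[0.70, 0.55, 0.40]])
print(model.predict_proba(region[:, keep]))  # posterior over {0, 1}
```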

    Design Data Warehouse for Medical Data

    Organizing and managing database relations using data warehouse technology has been widely addressed in different complex environments. A data warehouse is a valuable source for data mining, since the data it contains are cleaned, integrated, and organized. This study highlights existing issues with medical databases, which hold a huge amount of information across various departments; managing this type of data requires time and laborious work to access and integrate it reliably. Hence, this study aims to model a new medical data warehouse architecture for managing and organizing medical dataset operations in the data warehouse. Technically, OLAP is used to design the proposed architecture, so that hospital administrators, top managers and/or sophisticated users can work with the medical data warehouse (MDW) through Microsoft SQL Server 2005. The proposed architecture was built using Microsoft Visual Studio to perform the OLE DB operations, and the resulting process was tested using the use-case testing technique.
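    The architecture itself is built on Microsoft SQL Server 2005; purely as an illustration of the OLAP-style roll-up involved, the following pandas sketch aggregates a hypothetical star-schema fact table of admissions along two dimensions.

```python
import pandas as pd

# Hypothetical fact table: one row per admission, with dimension keys.
facts = pd.DataFrame({
    "department": ["Cardiology", "Cardiology", "Oncology", "Oncology"],
    "year":       [2004, 2005, 2004, 2005],
    "admissions": [120, 135, 80, 95],
    "cost":       [54000.0, 61000.0, 72000.0, 80500.0],
})

# Roll-up: aggregate the measures along the department and year
# dimensions, the same slice-and-dice view an OLAP cube would expose
# to hospital administrators.
cube = facts.pivot_table(index="department", columns="year",
                         values=["admissions", "cost"], aggfunc="sum")
print(cube)
```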

    Investigating The Key Factors Affecting The Use Of Telemedicine In Iraqi Hospitals

    The weakness of information sharing became clear with the events of 11 September 2001, which could not be stopped or prevented. A prevalent relationship has since emerged between information sharing and intelligence in the context of counter-terrorism. A few studies have been conducted in this domain in Western countries, while none have addressed the countries directly affected by terrorist attacks, especially in the Middle East. Issues with information sharing in the intelligence domain remain significant challenges. Moreover, the literature shows there is no single model combining technology, information sharing, and human factors, and an empirical gap remains in determining what intelligence agencies need to develop a non-failing intelligence product. This study aims to analyse the technology gap in fully supporting the common requirements of information sharing in Iraqi intelligence by proposing an electronic information sharing model based on the Layered Behavioral Model. Fourteen factors are employed across five layers: Policies and Political Constraints in the Environmental Layer; Compatibility, Information Quality, and Common Data Repository in the Organisation Layer; Cost, Expected Benefits, and Expected Risk in the Information Fusion Center Layer; Technology Capability, Top Management Support, and Coordination in the Readiness Layer; and Trust, Information Stewardship, and Information Security in the Individual Layer. A quantitative method was employed to achieve a broader view of the phenomenon under investigation and to address a wide range of attitudinal and behavioural issues; this statistical approach tested the proposed research hypotheses for the factors. Empirical testing found that Policies, Compatibility, Common Data Repository, Cost, Expected Benefits, Expected Risk, Technology Capability, Top Management Support, Trust, Information Stewardship, and Information Security had a significant influence on the degree of electronic information sharing, whereas Political Constraints, Information Quality, and Coordination did not. The contributions of this study are: a new theoretical model for electronic information sharing within the intelligence domain; an extension of the existing literature on the layers and factors affecting electronic information sharing and intelligence; a new vision for developing an information fusion center in the context of electronic information sharing; a narrowing of the empirical gap in intelligence-sector research; and a formal strategy and a series of guidelines for Iraqi intelligence authorities to govern e-information sharing activities.
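    As a rough illustration of the kind of statistical testing involved, the sketch below regresses a simulated sharing score on two of the factors and reads significance off the p-values; the data and the plain-regression stand-in are assumptions for illustration, not the study's actual instrument or analysis.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200

# Hypothetical predictors: mean survey scores for two of the factors.
trust = rng.normal(3.5, 0.8, n)
coordination = rng.normal(3.0, 0.9, n)

# Simulated outcome: degree of electronic information sharing, driven
# by trust but not coordination (mirroring the reported finding).
sharing = 1.0 + 0.6 * trust + rng.normal(0, 0.7, n)

X = sm.add_constant(np.column_stack([trust, coordination]))
model = sm.OLS(sharing, X).fit()

# p-values below 0.05 mark a significant influence on sharing.
print(model.summary(xname=["const", "trust", "coordination"]))
```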

    Word Sense Disambiguation for clinical abbreviations

    Abbreviations are extensively used in electronic health records (EHRs) as well as in other medical documentation, accounting for 30-50% of the words in clinical narrative. More than 197,000 unique medical abbreviations are found in clinical text, and their meanings vary depending on the context in which they are used. Since data in electronic health records may be shared across health information systems (hospitals, primary care centers, etc.) as well as with other systems such as those of insurance companies, determining the correct meaning of abbreviations is essential to avoid misunderstandings. Clinical abbreviations have the specific characteristic of not following any standard creation rules, which makes it complicated to find abbreviations and their corresponding meanings. Furthermore, there is the added difficulty of working with clinical data under privacy constraints, even though such data are essential for developing and testing algorithms. Word sense disambiguation (WSD) is an essential task in natural language processing (NLP) applications such as information extraction, chatbots and summarization systems, among others. WSD aims to identify the correct meaning of an ambiguous word that has more than one meaning; disambiguating clinical abbreviations is a lexical-sample WSD task. Previous research has adopted supervised, unsupervised and knowledge-based (KB) approaches to disambiguate clinical abbreviations. This thesis proposes a classification model that, apart from disambiguating well-known abbreviations, also disambiguates rare and unseen abbreviations using the most recent deep neural network architectures for language modeling. Several resources and disambiguation models for clinical abbreviation disambiguation were surveyed, and the different classification approaches used to disambiguate clinical abbreviations were investigated. Since computers do not directly understand text, different data representations were implemented to capture the meaning of words; the evaluation measures used to assess the algorithms are also discussed. As solutions to clinical WSD, we explored static word-embedding representations for 13 English clinical abbreviations of the UMN data set (from the University of Minnesota), testing traditional supervised machine learning algorithms separately for each abbreviation. Moreover, we fine-tuned a transformer-based pretrained model as a multi-class classifier for the whole data set (75 abbreviations of the UMN data set); the aim of implementing a single multi-class classifier is to predict the rare and unseen abbreviations that are common in clinical narrative. Additionally, other experiments were conducted for a different type of abbreviation (scientific abbreviations and acronyms) by defining a hybrid approach composed of supervised and knowledge-based methods. Most previous works build a separate classifier for each clinical abbreviation and tend to leverage different data resources to overcome the data acquisition bottleneck; however, those models are restricted to disambiguating terms that were seen in the training data. Based on our results, transfer learning by fine-tuning a transformer-based model can predict rare and unseen abbreviations.
A remaining challenge for future work is to improve the model to automate the disambiguation of clinical abbreviations in run-time systems by implementing self-supervised learning models.
Doctoral Programme in Computer Science and Technology, Universidad Carlos III de Madrid. Committee: Chair: Israel González Carrasco; Secretary: Leonardo Campillos Llanos; Member: Ana María García Serrano.
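    As an illustration of the static-embedding baseline, the following sketch trains one supervised classifier for a single ambiguous abbreviation over averaged context vectors; the toy embedding table and the two senses of "RA" are hypothetical stand-ins for the UMN data, not the thesis's actual resources.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

EMB = {  # toy pre-trained word embeddings (3-d instead of 300-d)
    "joint":   np.array([0.9, 0.1, 0.0]),
    "pain":    np.array([0.8, 0.2, 0.1]),
    "heart":   np.array([0.1, 0.9, 0.2]),
    "chamber": np.array([0.0, 0.8, 0.3]),
}

def embed(context: str) -> np.ndarray:
    """Average the embeddings of the known context words."""
    vecs = [EMB[w] for w in context.lower().split() if w in EMB]
    return np.mean(vecs, axis=0)

# Contexts around the ambiguous abbreviation, with their gold senses.
X = np.stack([embed("joint pain"), embed("pain joint"),
              embed("heart chamber"), embed("chamber heart")])
y = ["rheumatoid arthritis", "rheumatoid arthritis",
     "right atrium", "right atrium"]

clf = LogisticRegression().fit(X, y)
print(clf.predict([embed("chamber pain heart")]))  # -> ['right atrium']
```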

    The Global Project of the Professor Taha Jabir Al-Alwani: Enriching and Reviewing the Islamic Experience in International Relations

    This paper tracks the development of a scholar whose writings have renewed civilizational studies on the global and humanitarian levels through universal unification, humanitarian recommendation, and psychological reconstruction. The impact of this intellectual renewal on international relations is assessed in this study. The first part evaluates the Islamic experience in international relations by considering the values and globalism inherent in international relations concepts. In this context, the study considers the interpretation of the book of Al-Diyar Jurisprudence and its impact on classifying people in terms of their beliefs. It reviews the impact of these beliefs on "human unity". It also underscores the reconsideration of the earth as a single home for a single family and the need to classify people on the basis of coexistence. This is followed by a consideration of the Holy Quran as a source of judgement, where renewal has reinstated the Holy Book in terms of content, revelation, an address to all people, a source of their values, and a means of shaping globalism. The second section offers a critique of Western civilization, its knowledge patterns, and the call for a globalized Western civilization model. The third section considers the nature of post-Cold War characteristics, including: the impact of the internet revolution on closeness among world communities and the justification for a global residence; the cultural impact of these developments; and the influence of cross-national factors in international relations, such as religious movements. Keywords: International Political Development, Global Value Matrix, Globalism, Succession on Earth, Human Rights, Foreign Relations, Globalization.

    Identification and Ranking of Relevant Image Content

    The work in this thesis proposes an image understanding algorithm for automatically identifying and ranking image regions into several levels of importance. Given a color image, specialized maps for classifying image content, namely weighted similarity, weighted homogeneity, image contrast and memory color maps, are generated and combined to produce a perceptual importance map. Further analysis of this map yields a region ranking map, which sorts the image content into different levels of significance. The algorithm was tested on a large database containing a variety of color images, acquired from the Berkeley segmentation dataset as well as internal images. Experimental results show that our technique matches human manual ranking with 90% efficiency. Applications of the proposed algorithm include image rendering, classification, indexing and retrieval; adaptive compression and camera auto-focus are other potential applications.
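    As an illustration of the map-combination step, the following sketch blends four stand-in maps into a perceptual importance map and quantizes it into ranking levels; the equal weights and the number of levels are assumptions for illustration, not the thesis's values.

```python
import numpy as np

h, w = 4, 4
rng = np.random.default_rng(1)

# Stand-ins for the similarity, homogeneity, contrast and
# memory-color maps, each normalized to [0, 1].
maps = {name: rng.random((h, w)) for name in
        ("similarity", "homogeneity", "contrast", "memory_color")}
weights = {name: 0.25 for name in maps}

# Weighted combination yields the perceptual importance map.
importance = sum(weights[n] * m for n, m in maps.items())

# Quantize importance into a small number of significance levels,
# giving the region ranking map (0 = least, 3 = most important).
levels = 4
ranking = np.digitize(importance, np.linspace(0, 1, levels + 1)[1:-1])
print(ranking)
```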

    REST compliant clients for REST APIs

    In today's distributed systems, REST services play a central role in defining application architecture. Current technologies and literature focus on building server-side REST applications but fail to provide generic, REST-compliant client solutions. Therefore, most offered services, and especially client applications, rarely comply with the constraints that constitute the REST architecture. In this thesis, the architecture of a new generic framework for building REST-compliant client applications is introduced. In addition, a new description language that conforms to REST's constraints and helps reduce development time is presented. We describe the building blocks of the proposed solutions and show a software implementation of a library that leverages the solutions' architectures. Using the proposed framework and description language, client applications that conform to the full set of REST's constraints can be built in an easy and optimized way. In addition, REST service providers can rely on the proposed description language to eliminate the complexity of repeatedly building customized solutions for different technologies or platforms.
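    As an illustration of the hypermedia-driven behaviour a REST-compliant client must exhibit, the sketch below discovers URLs by following named link relations from a single entry point rather than hard-coding them; the API root, the "_links" layout and the relation names are hypothetical, not the thesis's framework.

```python
import requests

class HypermediaClient:
    """Generic client that discovers URLs via link relations (HATEOAS)."""

    def __init__(self, entry_point: str):
        self.entry_point = entry_point

    def follow(self, *rels: str) -> dict:
        """Walk a chain of link relations from the entry point."""
        doc = requests.get(self.entry_point).json()
        for rel in rels:
            # Each representation advertises where the client may go next.
            url = doc["_links"][rel]["href"]
            doc = requests.get(url).json()
        return doc

# Hypothetical usage: no resource URL is hard-coded beyond the root.
client = HypermediaClient("https://api.example.com/")
orders = client.follow("orders")
first = client.follow("orders", "first")
```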

    Disambiguating Clinical Abbreviations using Pre-trained Word Embeddings

    Thanks to Palestine Technical University-Kadoorie and the Deep EMR project (TIN2017-87548-C2-1-R) for partially funding this work.

    Barriers facing telemedicine implementation in developing countries: toward building an Iraqi telemedicine framework

    The Iraqi healthcare services are struggling to regain lost momentum. Many professional physicians and nurses have left Iraq because of the current situation there. In spite of plans to call the skilled health workforce back, they still fear the disadvantages of returning. Hence, technology plays a central role in taking advantage of their expertise through the use of telemedicine. There is thus a need to study the factors that affect the implementation of telemedicine, covering network services, policy makers and patient understanding. This paper presents the issues facing the implementation of telemedicine and analyses the literature on previous telemedicine efforts in Middle East countries to identify the essential factors toward building an Iraqi telemedicine framework.